

Appendix A Preliminaries

Neural Information Processing Systems

In this section, we discuss the hyperbolic operations used in HNN formulations and set up the meta-learning problem; this particular setup is also known as the N-way K-shot learning problem. This section also provides the theoretical proofs of the theorems presented in our main paper. Note that points in the local tangent space follow Euclidean algebra. The columns present the number of tasks in each batch (# Tasks), the HNN update learning rate (), the meta update learning rate (), and the size of the hidden dimensions (d).
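The N-way K-shot setup mentioned above can be made concrete with a small episode sampler. The sketch below is illustrative only and is not the paper's implementation; the `(label, example)` dataset format and the function name `sample_episode` are assumptions for the example.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_ways, k_shot, k_query, rng=random):
    """Sample one N-way K-shot episode (hypothetical sketch).

    dataset: list of (label, example) pairs.
    Returns (support, query), each a list of (episode_label, example),
    where episode_label is the index of the class within this episode.
    """
    by_class = defaultdict(list)
    for label, ex in dataset:
        by_class[label].append(ex)
    # Choose N classes for this episode.
    classes = rng.sample(sorted(by_class), n_ways)
    support, query = [], []
    for ep_label, cls in enumerate(classes):
        # K support examples plus a held-out query set per class.
        examples = rng.sample(by_class[cls], k_shot + k_query)
        support += [(ep_label, ex) for ex in examples[:k_shot]]
        query += [(ep_label, ex) for ex in examples[k_shot:]]
    return support, query
```

Each meta-training batch would then draw several such episodes (# Tasks of them), with the inner HNN update applied on the support set and the meta update evaluated on the query set.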



Generalised f-Mean Aggregation for Graph Neural Networks

Neural Information Processing Systems

Graph Neural Network (GNN) architectures are defined by their implementations of update and aggregation modules. While many works focus on new ways to parametrise the update modules, the aggregation modules receive comparatively little attention. Because aggregation functions are difficult to parametrise, most current methods select a "standard aggregator" such as mean, sum, or max. While this selection is often made without any reasoning, it has been shown that the choice of aggregator has a significant impact on performance, and that the best choice of aggregator is problem-dependent. Since aggregation is a lossy operation, it is crucial to select the most appropriate aggregator in order to minimise information loss. In this paper, we present GenAgg, a generalised aggregation operator, which parametrises a function space that includes all standard aggregators. In our experiments, we show that GenAgg is able to represent the standard aggregators with much higher accuracy than baseline methods. We also show that using GenAgg as a drop-in replacement for an existing aggregator in a GNN often leads to a significant boost in performance across various tasks.
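The generalised f-mean underlying this idea can be sketched in a few lines. This is a minimal illustration of the f-mean itself, not GenAgg's actual parametrisation: `f_mean` and the particular choices of `f` below are assumptions for the example.

```python
import math

def f_mean(values, f, f_inv):
    """Generalised f-mean: f_inv( (1/n) * sum(f(x) for x in values) )."""
    return f_inv(sum(f(x) for x in values) / len(values))

xs = [1.0, 2.0, 4.0]

# f = identity recovers the arithmetic mean.
arith = f_mean(xs, lambda x: x, lambda y: y)

# f = log recovers the geometric mean.
geom = f_mean(xs, math.log, math.exp)

# f = x**p gives the power mean, which approaches max(xs) as p grows.
def power_mean(p):
    return f_mean(xs, lambda x: x**p, lambda y: y**(1.0 / p))
```

Different choices of `f` thus interpolate between standard aggregators within one function space, which is the property a learnable parametrisation can exploit.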


Fine-tuning language models to find agreement among humans with diverse preferences Appendix

Neural Information Processing Systems

We refer to Table S2 for example questions from a subset of the clusters. Each participant first read the task instructions (see Figure S2) and completed a short comprehension test. The comprehension check was designed to test the participants' knowledge and understanding of key aspects of the experiment. Once all players had joined, the group started the main experiment. In practice, data was collected in batches of around 20 groups (100 participants) in parallel.


PhysGNN: A Physics-Driven Graph Neural Network Based Model for Predicting Soft Tissue Deformation in Image-Guided Neurosurgery

Neural Information Processing Systems

On the other hand, model-based methods are a set of registration techniques that treat images as a deformable volume--a notion first introduced by Broit [1981]--to better capture elastic and plastic deformations.